AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions

Barnett, Peter, Scher, Aaron

arXiv.org Artificial Intelligence

Humanity appears to be on course to soon develop AI systems that substantially outperform human experts in all cognitive domains and activities. We believe the default trajectory has a high likelihood of catastrophe, including human extinction. Risks come from failure to control powerful AI systems, misuse of AI by malicious rogue actors, war between great powers, and authoritarian lock-in. This research agenda has two aims: to describe the strategic landscape of AI development and to catalog important governance research questions. These questions, if answered, would provide important insight on how to successfully reduce catastrophic risks. We describe four high-level scenarios for the geopolitical response to advanced AI development, cataloging the research questions most relevant to each. Our favored scenario involves building the technical, legal, and institutional infrastructure required to internationally restrict dangerous AI development and deployment (which we refer to as an Off Switch), which leads into an internationally coordinated Halt on frontier AI activities at some point in the future. The second scenario we describe is a US National Project for AI, in which the US Government races to develop advanced AI systems and establish unilateral control over global AI development. We also describe two additional scenarios: a Light-Touch world similar to that of today and a Threat of Sabotage situation where countries use sabotage and deterrence to slow AI development. In our view, apart from the Off Switch and Halt scenario, all of these trajectories appear to carry an unacceptable risk of catastrophic harm. Urgent action is needed from the US National Security community and AI governance ecosystem to answer key research questions, build the capability to halt dangerous AI activities, and prepare for international AI agreements.


International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty

Scholefield, Rebecca, Martin, Samuel, Barten, Otto

arXiv.org Artificial Intelligence

The malicious use or malfunction of advanced general-purpose AI (GPAI) poses risks that, according to leading experts, could lead to the 'marginalisation or extinction of humanity.' To address these risks, there are an increasing number of proposals for international agreements on AI safety. In this paper, we review recent (2023-) proposals, identifying areas of consensus and disagreement, and drawing on related literature to assess their feasibility. We focus our discussion on risk thresholds, regulations, types of international agreement and five related processes: building scientific consensus, standardisation, auditing, verification and incentivisation. Based on this review, we propose a treaty establishing a compute threshold above which development requires rigorous oversight. This treaty would mandate complementary audits of models, information security and governance practices, overseen by an international network of AI Safety Institutes (AISIs) with authority to pause development if risks are unacceptable. Our approach combines immediately implementable measures with a flexible structure that can adapt to ongoing research.


Governing dual-use technologies: Case studies of international security agreements and lessons for AI governance

Wasil, Akash R., Barnett, Peter, Gerovitch, Michael, Hauksson, Roman, Reed, Tom, Miller, Jack William

arXiv.org Artificial Intelligence

International AI governance agreements and institutions may play an important role in reducing global security risks from advanced AI. To inform the design of such agreements and institutions, we conducted case studies of historical and contemporary international security agreements. We focused specifically on those arrangements around dual-use technologies, examining agreements in nuclear security, chemical weapons, biosecurity, and export controls. For each agreement, we examined four key areas: (a) purpose, (b) core powers, (c) governance structure, and (d) instances of non-compliance. From these case studies, we extracted lessons for the design of international AI agreements and governance institutions. We discuss the importance of robust verification methods, strategies for balancing power between nations, mechanisms for adapting to rapid technological change, approaches to managing trade-offs between transparency and security, incentives for participation, and effective enforcement mechanisms.


Verification methods for international AI agreements

Wasil, Akash R., Reed, Tom, Miller, Jack William, Barnett, Peter

arXiv.org Artificial Intelligence

What techniques can be used to verify compliance with international agreements about advanced AI development? In this paper, we examine 10 verification methods that could detect two types of potential violations: unauthorized AI training (e.g., training runs above a certain FLOP threshold) and unauthorized data centers. We divide the verification methods into three categories: (a) national technical means (methods requiring minimal or no access from suspected non-compliant nations), (b) access-dependent methods (methods that require approval from the nation suspected of unauthorized activities), and (c) hardware-dependent methods (methods that require rules around advanced hardware). For each verification method, we provide a description, historical precedents, and possible evasion techniques. We conclude by offering recommendations for future work related to the verification and enforcement of international AI governance agreements.
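One of the violation types above is a training run exceeding a declared FLOP threshold. As a rough illustration of what such a threshold check involves, the sketch below uses the common 6·N·D rule of thumb (about 6 FLOPs per parameter per training token) to estimate a run's total compute; the threshold value of 1e26 FLOPs is purely illustrative and not drawn from any specific agreement.

```python
# Minimal sketch of a FLOP-threshold check for a training run.
# The 6*N*D approximation is a standard rule of thumb; the threshold
# value here is an assumed, illustrative figure.

FLOP_THRESHOLD = 1e26  # illustrative oversight threshold, total training FLOPs


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6.0 * n_params * n_tokens


def requires_oversight(n_params: float, n_tokens: float,
                       threshold: float = FLOP_THRESHOLD) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= threshold


# Example: a 1-trillion-parameter model trained on 20 trillion tokens
# yields roughly 6 * 1e12 * 20e12 = 1.2e26 FLOPs, above the threshold.
print(requires_oversight(1e12, 20e12))   # above threshold
print(requires_oversight(1e9, 1e12))     # a small run, far below
```

In practice, verification regimes would need to estimate these quantities from observable signals (chip counts, energy use, declared workloads) rather than trusting self-reported parameter and token counts, which is why the paper's hardware-dependent methods matter.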


How the U.N. Plans to Shape the Future of AI

TIME - Tech

As the United Nations General Assembly gathered this week in New York, the U.N. Secretary-General's envoy on technology, Amandeep Gill, hosted an event titled Governing AI for Humanity, where participants discussed the risks that AI might pose and the challenges of achieving international cooperation on artificial intelligence. Secretary-General António Guterres and Gill have said they believe that a new U.N. agency will be required to help the world cooperate in managing this powerful technology. But the issues that the new entity would seek to address and its structure are yet to be determined, and some observers say that ambitious plans for global cooperation like this rarely get the required support of powerful nations. Gill has led efforts to make advanced forms of technology safer before. He was chair of the Group of Governmental Experts of the Convention on Certain Conventional Weapons when the Campaign to Stop Killer Robots, which sought to compel governments to outlaw the development of lethal autonomous weapons systems, failed to gain traction with global superpowers including the U.S. and Russia.


AI requires 'new generation' of arms control deal to govern future warfighting, says Marine veteran lawmaker

FOX News

Tom Newhouse, vice president of Convergence Media, discusses the potential impact of artificial intelligence on elections after an RNC AI ad garnered attention. A Marine veteran lawmaker says the U.S. should be pushing for a new international agreement to govern the use of artificial intelligence on the battlefield and believes it's a "strategic mistake" the Pentagon hasn't started this important task. Rep. Seth Moulton, D-Mass., said the U.S. needs to work with other military powers to flesh out rules of the road on how AI can and cannot be deployed by military forces before AI becomes much more advanced. "When we get to the point of having killer robots, it's going to be a real problem for us if we don't have some established international norms for their use," Moulton told Fox News Digital. "Adversaries like China and Russia -- which don't care about collateral damage, they don't care about civilian casualties, they don't care about human rights -- they're going to have an advantage in making their robots more lethal because they'll be less constrained."


Exposure to smog in childhood can affect your cognitive skills up to 60 years later, study warns

Daily Mail - Science & tech

Being exposed to smog and air pollution as a child can damage your cognitive skills up to six decades later, a new study has warned. University of Edinburgh experts tested the general intelligence of over 500 people aged 70 using a test the same group completed when they were 11 years old. They also examined where the group had lived throughout their lives to estimate the levels of air pollution they were exposed to during early childhood. Volunteers who had been exposed to air pollution as children had suffered a small but detectable level of cognitive decline between the ages of 11 and 70. The researchers did not examine why air pollution appears to reduce cognitive skill, but previous studies suggest it may be due to metal-bearing particles in the pollution reaching the brain and damaging neurons.


Prioritise, specialise, mobilise: advancing international collaboration in AI research and policy

In Verba

As AI technologies advance at pace, governments across the world are developing national strategies to harness their economic benefits for national advantage and societal well-being. The potential of AI to improve the lives and livelihoods of people across the world is significant, from helping address climate change and food security, to supporting elderly care and healthcare delivery. However, these technologies also have the potential to reinforce social divisions and inequalities, with implications for those already at the margins of society. Advances in science have long relied on international flows of people and ideas. At a time when questions about how the benefits of technological progress are shared across society are at the fore of public and policy debate, maintaining strong international collaborations can play an important role in connecting communities and ensuring that technology advances in ways that benefit all in society.


Sorry, Banning 'Killer Robots' Just Isn't Practical

WIRED

That's not because it's impossible to ban weapons technologies. Some 192 nations have signed the Chemical Weapons Convention banning chemical weapons, for example. But the UK government has not suggested it would be open to an international agreement banning autonomous weapons. In 2015, it responded to calls for such a ban by saying there was no need for one, and that existing international law was sufficient.